Existing 3D-aware image synthesis approaches mainly focus on generating a single canonical object and show limited capacity in composing a complex scene containing a variety of objects. This work presents DisCoScene: a 3D-aware generative model for high-quality and controllable scene synthesis. The key ingredient of our method is a very abstract object-level representation (i.e., 3D bounding boxes without semantic annotation) as the scene layout prior, which is simple to obtain, general enough to describe various scene contents, and yet informative enough to disentangle objects and the background. Moreover, it serves as an intuitive user control for scene editing. Building on such a prior, the proposed model spatially disentangles the whole scene into object-centric generative radiance fields by learning on only 2D images with global-local discrimination. Our model obtains the generation fidelity and editing flexibility of individual objects while being able to efficiently compose objects and the background into a complete scene. We demonstrate state-of-the-art performance on many scene datasets, including the challenging Waymo outdoor dataset. Project page: https://snap-research.github.io/discoscene/
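A minimal sketch (not the authors' code) of how a 3D bounding-box layout prior can spatially disentangle a scene: global sample points are mapped into each box's canonical frame, queried against a per-object radiance field, and composited with the background. All function names and the toy constant-density fields are assumptions for illustration.

```python
import torch

def world_to_canonical(points, box_center, box_rotation, box_scale):
    """Map world-space points (N, 3) into a box's unit canonical frame."""
    local = (points - box_center) @ box_rotation  # undo translation/rotation
    return local / box_scale                      # normalize by box extent

def composite_density(points, boxes, object_fields, background_field):
    """Sum densities from every object whose bounding box contains a point."""
    density = background_field(points)            # background covers the rest
    for box, field in zip(boxes, object_fields):
        canon = world_to_canonical(points, *box)
        inside = (canon.abs() <= 1.0).all(dim=-1, keepdim=True).float()
        density = density + inside * field(canon) # objects act only inside boxes
    return density

# Toy usage: constant-density lambdas stand in for learned radiance fields.
pts = torch.randn(1024, 3)
box = (torch.zeros(3), torch.eye(3), torch.ones(3) * 0.5)
density = composite_density(pts, [box], [lambda x: torch.ones(len(x), 1)],
                            lambda x: 0.1 * torch.ones(len(x), 1))
```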
Video generation requires synthesizing consistent and persistent frames with dynamic content over time. This work investigates modeling the temporal relations for composing videos of arbitrary length, from a few frames to even infinitely many, using generative adversarial networks (GANs). First, towards composing adjacent frames, we show that the alias-free operation for single-image generation, together with adequately pre-learned knowledge, brings a smooth frame transition without compromising per-frame quality. Second, by incorporating the temporal shift module (TSM), originally designed for video understanding, into the discriminator, we manage to advance the generator in synthesizing more consistent dynamics. Third, we develop a novel B-spline based motion representation to ensure temporal smoothness and achieve infinite-length video generation, going beyond the frame number used in training. A low-rank temporal modulation is also proposed to alleviate repeating contents in long video generation. We evaluate our approach on various datasets and show substantial improvements over video generation baselines. Code and models will be publicly available at https://genforce.github.io/StyleSV.
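A hedged sketch of the temporal shift module (TSM) the abstract borrows from video understanding: a fraction of the channels is shifted one step forward or backward in time at zero extra parameter cost, letting a frame-level discriminator see temporal context. The fold ratio and tensor layout below are illustrative assumptions, not the paper's settings.

```python
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """x: (batch, time, channels, height, width). Shift 1/fold_div of the
    channels backward in time, another 1/fold_div forward, keep the rest."""
    b, t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # pull from the future
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # pull from the past
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # untouched channels
    return out

frames = torch.randn(2, 16, 64, 8, 8)   # toy discriminator features
shifted = temporal_shift(frames)        # now mixes information across time
```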
Our work targets searching for feasible adversarial perturbations to attack a classifier with high-dimensional categorical inputs in a domain-agnostic setting. This is intrinsically an NP-hard knapsack problem whose exploration space grows explosively as the feature dimension increases. Without the help of domain knowledge, solving this problem via heuristic methods such as branch-and-bound suffers from exponential complexity and can still yield arbitrarily bad attack results. We address the challenge through the lens of multi-armed bandit based combinatorial search. Our proposed method, namely FEAT, treats modifying each categorical feature as pulling an arm in a multi-armed bandit program. Our objective is to achieve a highly efficient and effective attack using an Orthogonal Matching Pursuit (OMP)-enhanced Upper Confidence Bound (UCB) exploration strategy. Our theoretical analysis bounding the regret gap of FEAT guarantees its practical attack performance. In the empirical analysis, we compare FEAT with other state-of-the-art domain-agnostic attack methods over various real-world categorical datasets from different applications. Substantial experimental observations confirm the expected efficiency and attack effectiveness of FEAT in different application scenarios. Our work further hints at the applicability of FEAT for assessing the adversarial vulnerability of classification systems with high-dimensional categorical inputs.
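A minimal sketch of the UCB-style arm selection FEAT builds on: each categorical feature is an arm, and the attacker repeatedly perturbs the feature with the best upper confidence bound on its observed attack gain. The reward model and the OMP refinement step are omitted; all names here are assumptions, not the paper's implementation.

```python
import math
import random

def ucb_attack(num_features, attack_gain, budget, c=1.4):
    """attack_gain(i) -> stochastic reward in [0, 1] for perturbing feature i."""
    counts = [0] * num_features
    values = [0.0] * num_features
    chosen = []
    for step in range(1, budget + 1):
        ucb = [
            float("inf") if counts[i] == 0            # try every arm once
            else values[i] + c * math.sqrt(math.log(step) / counts[i])
            for i in range(num_features)
        ]
        arm = max(range(num_features), key=lambda i: ucb[i])
        reward = attack_gain(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        chosen.append(arm)
    return chosen

# Toy run: feature 3 is the most vulnerable one, so it gets picked most often.
picks = ucb_attack(10, lambda i: random.random() * (0.9 if i == 3 else 0.2), 200)
```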
Diffusion models, which learn to reverse a signal destruction process to generate new data, typically require the signal at each step to have the same dimension. We argue that, considering the spatial redundancy in image signals, there is no need to maintain a high dimensionality in the evolution process, especially in the early generation phase. To this end, we make a theoretical generalization of the forward diffusion process via signal decomposition. Concretely, we manage to decompose an image into multiple orthogonal components and control the attenuation of each component when perturbing the image. That way, as the noise strength increases, we are able to diminish those inconsequential components and thus use a lower-dimensional signal to represent the source, losing barely any information. Such a reformulation allows varying dimensions in both the training and inference of diffusion models. Extensive experiments on a range of datasets suggest that our approach substantially reduces the computational cost and achieves on-par or even better synthesis performance compared to baseline methods. We also show that our strategy facilitates high-resolution image synthesis and improves the FID of a diffusion model trained on FFHQ at $1024\times1024$ resolution from 52.40 to 10.46. Code and models will be made publicly available.
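A hedged sketch of the decomposition idea: split an image into a low-dimensional component (its blockwise average, an orthogonal projection) and the residual, then attenuate the residual as the noise level grows so a lower-dimensional signal suffices. The schedule and function names are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def decompose(x, factor=2):
    """Split x into a low-dimensional component and its orthogonal residual,
    so that x == upsample(low) + residual."""
    low = F.avg_pool2d(x, factor)
    recon = F.interpolate(low, scale_factor=factor, mode="nearest")
    return low, x - recon

def perturb(x, t, factor=2):
    """Toy forward process: the residual fades as the noise level t -> 1, so
    the signal becomes representable by the low-dimensional component alone."""
    low, residual = decompose(x, factor)
    recon = F.interpolate(low, scale_factor=factor, mode="nearest")
    signal = recon + (1.0 - t) * residual           # damp detail with noise level
    return (1.0 - t) ** 0.5 * signal + t ** 0.5 * torch.randn_like(x)

img = torch.randn(1, 3, 32, 32)
noisy = perturb(img, t=0.9)   # detail mostly attenuated under heavy noise
```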
By distinguishing real from synthesized samples, the discriminator plays a vital role in training generative adversarial networks (GANs). While the real data distribution remains the same, the synthesis distribution keeps varying because of the evolving generator, causing a corresponding change to the discriminator's binary classification task. We argue that a discriminator with on-the-fly adjustment of its capacity can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional computation cost or training objectives. Two capacity-adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware image synthesis tasks conducted on a range of datasets substantiate the generalizability of DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is synergistic with other discriminator-improving approaches (including data augmentation, regularizers, and pre-training), and brings continuous performance gains when combined for learning GANs.
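A hedged sketch of the capacity-adjustment idea: the discriminator's active width is a scheduled fraction of its maximum width. Masking is used here only to keep tensor shapes fixed; the actual method adjusts real layer widths. The layer and schedules are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicWidthConv(nn.Module):
    def __init__(self, in_ch, max_out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, max_out_ch, 3, padding=1)
        self.width = 1.0                      # active fraction of channels

    def forward(self, x):
        out = self.conv(x)
        k = max(1, int(out.shape[1] * self.width))
        mask = torch.zeros(1, out.shape[1], 1, 1, device=out.device)
        mask[:, :k] = 1.0
        return out * mask                     # inactive channels contribute zero

layer = DynamicWidthConv(64, 256)
for step in range(1000):
    frac = step / 999
    layer.width = 0.25 + 0.75 * frac          # grow width: plentiful data
    # layer.width = 1.0 - 0.75 * frac         # shrink width: limited data
y = layer(torch.randn(2, 64, 16, 16))
```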
Automated theorem proving with deep learning methods has recently attracted attention. In this paper, we construct an automatic proof system for trigonometric identities. We define a normalized form for trigonometric identities, design a set of rules for proofs, and propose a method that can generate a theoretically infinite number of trigonometric identities. Our goal is not only to complete the proof, but to complete it in as few steps as possible. For this reason, we design a model to learn from proof data generated by random BFS (rBFS), and we prove both theoretically and experimentally that, after simple imitation learning, the model can outperform rBFS. After further improvement through reinforcement learning, we obtain AutoTrig, which can give proof steps for identities in almost as few steps as BFS (the theoretically shortest method), at only one-thousandth of the time cost. In addition, AutoTrig also beats Sympy, Matlab, and humans on the synthetic dataset, and performs well in many generalization tasks.
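A minimal sketch (not the paper's system) of breadth-first proof search over rewrite rules for a trigonometric identity: states are expressions, moves are rule applications, and the goal is reducing lhs - rhs to 0. The tiny rule set below is an assumption for illustration.

```python
import sympy as sp
from collections import deque

x = sp.symbols("x")
RULES = [sp.expand_trig, sp.trigsimp, sp.expand, sp.factor]

def bfs_prove(lhs, rhs, max_depth=4):
    """Search for the shortest rule sequence reducing lhs - rhs to 0."""
    start = lhs - rhs
    queue = deque([(start, [])])
    seen = {sp.srepr(start)}
    while queue:
        expr, steps = queue.popleft()
        if expr == 0:
            return steps                      # names of rules, in proof order
        if len(steps) >= max_depth:
            continue
        for rule in RULES:
            nxt = rule(expr)
            key = sp.srepr(nxt)
            if key not in seen:
                seen.add(key)
                queue.append((nxt, steps + [rule.__name__]))
    return None

# sin(2x) = 2 sin(x) cos(x) falls to a single expand_trig step.
print(bfs_prove(sp.sin(2 * x), 2 * sp.sin(x) * sp.cos(x)))
```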
Making generative models 3D-aware bridges the 2D image space and the 3D physical world, yet remains challenging. Recent attempts equip a generative adversarial network (GAN) with a neural radiance field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior. However, the implicit function in NeRF has a very local receptive field, making it hard for the generator to become aware of the global structure. Meanwhile, NeRF is built on volume rendering, which can be too costly to produce high-resolution results, increasing the optimization difficulty. To alleviate these two problems, we propose a novel framework, termed VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation. We first learn a feature volume to represent the underlying structure, which is then converted to a feature field using a NeRF-like model. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a neural renderer for appearance synthesis. Such a design enables independent control of shape and appearance. Extensive experiments on a wide range of datasets show that our approach achieves sufficiently higher image quality and better 3D control than previous methods.
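A hedged sketch of the structural pathway described above: 3D sample points query a learned feature volume via trilinear interpolation, and a small NeRF-like MLP turns the local feature (plus the coordinate) into a feature-field value. The shapes and the MLP are illustrative assumptions, not VolumeGAN's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

volume = nn.Parameter(torch.randn(1, 32, 16, 16, 16))   # learned feature volume
mlp = nn.Sequential(nn.Linear(32 + 3, 64), nn.ReLU(), nn.Linear(64, 16))

def query_feature_field(points):
    """points: (N, 3) in [-1, 1]^3 -> (N, 16) feature-field samples."""
    grid = points.view(1, -1, 1, 1, 3)                   # grid_sample layout
    feats = F.grid_sample(volume, grid, align_corners=True)  # (1, 32, N, 1, 1)
    feats = feats.view(32, -1).t()                       # (N, 32) local features
    return mlp(torch.cat([feats, points], dim=-1))

samples = query_feature_field(torch.rand(4096, 3) * 2 - 1)
```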
Semi-supervised action recognition is a challenging but important task due to the high cost of data annotation. A common approach to this problem is to assign pseudo-labels to unlabeled data, which are then used as additional supervision in training. Typically, in recent work, the pseudo-labels are obtained by training a model on the labeled data and then using the model's confident predictions to teach itself. In this work, we propose a more effective pseudo-labeling scheme, called Cross-Model Pseudo-Labeling (CMPL). Concretely, in addition to the primary backbone, we introduce a lightweight auxiliary network and ask the two to predict pseudo-labels for each other. We observe that, due to their different structural biases, these two models tend to learn complementary representations from the same video clips. Each model can thus benefit from its counterpart by utilizing cross-model predictions as supervision. Experiments on different data partitioning protocols demonstrate significant improvements of our framework over existing alternatives. For example, CMPL achieves 17.6% and 25.1% Top-1 accuracy on Kinetics-400 and UCF-101, respectively, using only the RGB modality and 1% labeled data, outperforming our baseline model, FixMatch, by 9.0% and 10.3%.
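A minimal sketch of the cross-model pseudo-labeling objective: each network is supervised by its counterpart's confident predictions on unlabeled clips. The threshold, loss weighting, and function names are illustrative assumptions based on the abstract.

```python
import torch
import torch.nn.functional as F

def cross_model_loss(logits_primary, logits_auxiliary, threshold=0.95):
    """Each model is taught by its counterpart's confident predictions."""
    def one_direction(teacher_logits, student_logits):
        probs = teacher_logits.softmax(dim=-1).detach()   # stop teacher grads
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()                # keep confident clips
        loss = F.cross_entropy(student_logits, pseudo, reduction="none")
        return (loss * mask).mean()

    return (one_direction(logits_auxiliary, logits_primary)
            + one_direction(logits_primary, logits_auxiliary))

# Toy usage: random logits for a batch of 8 unlabeled clips, 10 classes.
loss = cross_model_loss(torch.randn(8, 10), torch.randn(8, 10))
```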
Class-incremental learning (CIL) aims to learn a multi-class classifier phase by phase, where only the data of a subset of classes is provided at each phase. Previous works mainly focus on mitigating forgetting in the phases after the initial one. However, we find that improving CIL at its initial phase is also a promising direction. Specifically, we experimentally show that directly encouraging the CIL learner at the initial phase to output representations similar to those of a model trained on all classes can greatly boost CIL performance. Motivated by this, we study the difference between a naively trained initial-phase model and the oracle model. Specifically, since one major difference between these two models is the number of training classes, we investigate how this difference affects the model representations. We find that with fewer training classes, the data representations of each class lie in a long and narrow region; with more training classes, the representations of each class scatter more uniformly. Inspired by this observation, we propose Class-wise Decorrelation (CwD), which effectively regularizes the representations of each class to scatter more uniformly, thus mimicking a model jointly trained on all classes (i.e., the oracle model). Our CwD is simple to implement and easy to plug into existing methods. Extensive experiments on various benchmark datasets show that CwD consistently and significantly improves the performance of existing state-of-the-art methods by around 1% to 3%. Code will be released.
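A hedged sketch of a class-wise decorrelation regularizer in the spirit of CwD: for each class in a batch, penalize the squared Frobenius norm of the correlation matrix of its centered, normalized representations so they scatter more uniformly. The normalization details are assumptions inferred from the abstract, not the paper's exact loss.

```python
import torch

def cwd_loss(features, labels):
    """features: (N, d) representations, labels: (N,) class ids."""
    loss, count = 0.0, 0
    for c in labels.unique():
        z = features[labels == c]
        if len(z) < 2:
            continue                               # need >1 sample per class
        z = z - z.mean(dim=0)                      # center per class
        z = z / (z.norm(dim=0, keepdim=True) + 1e-8)
        corr = z.t() @ z                           # (d, d) correlation matrix
        loss = loss + (corr ** 2).sum() / corr.shape[0] ** 2
        count += 1
    return loss / max(count, 1)

reg = cwd_loss(torch.randn(64, 128), torch.randint(0, 10, (64,)))
```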
The success of generative adversarial networks (GANs) is largely built upon the adversarial training between the generator (G) and the discriminator (D). They are expected to reach a certain equilibrium where D cannot distinguish the generated images from the real ones. In practice, however, such an equilibrium is hard to achieve in GAN training; instead, D almost always surpasses G. We attribute this phenomenon to the information asymmetry between D and G. Specifically, we observe that D develops visual attention when determining whether an image is real or fake, while G has no explicit clue about which regions to focus on for a particular synthesis. To alleviate the issue of D dominating the competition in GANs, we aim to raise the spatial awareness of G. Randomly sampled multi-level heatmaps are encoded into intermediate layers of G as an inductive bias, so that G can purposefully improve the synthesis of certain image regions. We further propose to align the spatial awareness of G with the attention map induced from D. In this way, we effectively lessen the information gap between D and G. Extensive results show that our method pushes the two-player game in GANs closer to equilibrium, leading to better synthesis performance. As a byproduct, the introduced spatial awareness facilitates interactive editing of the output synthesis. Demo video and more results are available at https://genforce.github.io/eqgan/.
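A minimal sketch of raising the generator's spatial awareness: a randomly sampled heatmap is encoded and added to an intermediate feature map, giving G an explicit hint about which regions to emphasize. The Gaussian-bump sampler and 1x1 encoding are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def random_heatmap(size):
    """A single random 2D Gaussian bump over a size x size grid."""
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size),
                            indexing="ij")
    cy, cx = torch.rand(2) * size                 # random bump center
    sigma = size / 4
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

encoder = nn.Conv2d(1, 64, kernel_size=1)         # lift heatmap to feature width

def inject_awareness(features):
    """features: (B, 64, H, W) intermediate generator activations."""
    b, _, h, w = features.shape
    maps = torch.stack([random_heatmap(h) for _ in range(b)]).unsqueeze(1)
    return features + encoder(maps)               # heatmap as inductive bias

out = inject_awareness(torch.randn(2, 64, 16, 16))
```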